The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task. Despite the promising performance, previous PLM-based approaches often suffer from hallucination problems because they neglect the structural information contained in the sentence, which essentially constitutes the key semantics of the logical forms. Furthermore, most works treat the PLM as a black box in which the generation process of the target logical form is hidden beneath the decoder modules, which greatly hinders the model's intrinsic interpretability. To address these two issues, we propose to augment current PLMs with a hierarchical decoder network. Taking the first-principle structures as semantic anchors, we propose two novel intermediate supervision tasks, namely Semantic Anchor Extraction and Semantic Anchor Alignment, for training the hierarchical decoders and probing the model's intermediate representations in a self-adaptive manner alongside the fine-tuning process. We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines. More importantly, by analyzing the intermediate representations of the hierarchical decoders, our approach also takes a significant step toward the intrinsic interpretability of PLMs in the domain of semantic parsing.
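A minimal sketch of how the two intermediate supervision tasks could be combined with the ordinary fine-tuning loss. The tensor shapes, the cosine form of the alignment objective, and the loss weights are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def intermediate_supervision_loss(lm_logits, lm_targets,
                                  anchor_logits, anchor_targets,
                                  anchor_hidden, anchor_emb,
                                  alpha=0.5, beta=0.5):
    """Combine the main Seq2Seq objective with the two intermediate
    supervision tasks; shapes: logits (B, T, V), targets (B, T),
    hidden/embeddings (B, T_a, H). All assumptions for illustration."""
    # Main fine-tuning objective: token-level cross-entropy on the
    # target logical form.
    loss_main = F.cross_entropy(lm_logits.flatten(0, 1), lm_targets.flatten())
    # Semantic Anchor Extraction: an intermediate decoder layer is also
    # asked to emit the anchor (sub-)structures.
    loss_extract = F.cross_entropy(anchor_logits.flatten(0, 1),
                                   anchor_targets.flatten())
    # Semantic Anchor Alignment: pull intermediate hidden states toward
    # embeddings of the gold anchors (a simple cosine objective here).
    loss_align = 1.0 - F.cosine_similarity(anchor_hidden, anchor_emb,
                                           dim=-1).mean()
    return loss_main + alpha * loss_extract + beta * loss_align
```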
Graph neural networks (GNNs) are popular machine learning methods for modeling graph data. Many GNNs perform well on homophilic graphs while performing poorly on heterophilic graphs. Recently, some researchers have turned their attention to designing GNNs for heterophilic graphs by adjusting the message-passing mechanism or enlarging the receptive field of message passing. Unlike existing works that mitigate the heterophily problem from the perspective of model design, we propose to study heterophilic graphs from an orthogonal angle: rewiring the graph structure to reduce heterophily and make traditional GNNs perform better. Through comprehensive empirical studies and analysis, we verify the potential of rewiring methods. To fully exploit this potential, we propose a method named Deep Heterophily Graph Rewiring (DHGR), which rewires the graph by adding homophilic edges and pruning heterophilic edges. The detailed rewiring decisions are made by comparing the similarity of the label/feature distributions of node neighbors. In addition, we design a scalable implementation of DHGR to ensure high efficiency. DHGR can easily be used as a plug-in module, i.e., a graph preprocessing step, for any GNN, including both homophilic and heterophilic GNNs, to improve their performance on node classification tasks. To the best of our knowledge, this is the first work studying graph rewiring for heterophilic graphs. Extensive experiments on 11 public graph datasets demonstrate the superiority of our proposed method.
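A toy sketch of similarity-based rewiring in the spirit of DHGR, assuming a dense similarity matrix, cosine similarity over neighbor label distributions, and hand-picked thresholds; the paper's method also uses feature distributions and a scalable implementation:

```python
import torch
import torch.nn.functional as F

def rewire(edge_index, neighbor_label_dist, num_nodes,
           add_thresh=0.9, prune_thresh=0.3, k=5):
    """edge_index: (2, E) long tensor; neighbor_label_dist: (N, C)
    per-node label distribution of neighborhoods. Dense (N, N)
    similarity, so small graphs only."""
    # Pairwise cosine similarity between neighborhood label distributions.
    d = F.normalize(neighbor_label_dist, dim=1)
    sim = d @ d.t()
    sim.fill_diagonal_(-1.0)
    # Prune likely-heterophilic edges: endpoints whose neighborhood
    # distributions are dissimilar.
    src, dst = edge_index
    kept = edge_index[:, sim[src, dst] > prune_thresh]
    # Add likely-homophilic edges: each node's top-k most similar peers,
    # kept only when similarity clears the add threshold.
    topv, topi = sim.topk(k, dim=1)
    rows = torch.arange(num_nodes).unsqueeze(1).expand_as(topi)
    mask = topv > add_thresh
    added = torch.stack([rows[mask], topi[mask]])
    return torch.cat([kept, added], dim=1)
```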
This paper presents a speaker diarization model based on target-speaker voice activity detection (TS-VAD) using transformers. To overcome the shortcoming that the original TS-VAD model cannot handle an arbitrary number of speakers, we investigate a model architecture that uses input tensors with variable-length time and speaker dimensions. Transformer layers are applied to the speaker axis to make the model output insensitive to the order of the speaker profiles provided to the TS-VAD model. Time-wise sequential layers are interleaved between these speaker-wise transformer layers to allow capturing the temporal and cross-speaker correlations of the input speech signal. We also extend a diarization model based on end-to-end neural diarization with encoder-decoder-based attractors (EEND-EDA) by replacing its dot-product-based speaker detection layer with the transformer-based TS-VAD. Experimental results on VoxConverse show that using transformers for cross-speaker modeling reduces the diarization error rate (DER) of TS-VAD by 10.9% relative, achieving a new state-of-the-art (SOTA) DER of 4.74%. Also, our extended EEND-EDA reduces DER by 6.9% relative on the CALLHOME dataset over the original EEND-EDA with a similar model size, achieving a new SOTA DER of 11.18% under the widely used training data setting.
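A minimal sketch of the alternating speaker-axis/time-axis attention pattern described above; the layer sizes and exact layer ordering are assumptions. Leaving positional encodings off the speaker axis is what makes the block insensitive to speaker-profile order:

```python
import torch
import torch.nn as nn

class SpeakerTimeBlock(nn.Module):
    """One block that alternates attention over the speaker axis and the
    time axis of a (batch, time, speakers, features) tensor."""
    def __init__(self, d_model=256, nhead=4):
        super().__init__()
        self.speaker_layer = nn.TransformerEncoderLayer(
            d_model, nhead, batch_first=True)
        self.time_layer = nn.TransformerEncoderLayer(
            d_model, nhead, batch_first=True)

    def forward(self, x):                     # x: (B, T, S, D)
        b, t, s, d = x.shape
        # Attend across speakers at each frame; no positional encoding on
        # this axis, so the output is permutation-equivariant in speakers.
        x = self.speaker_layer(x.reshape(b * t, s, d)).reshape(b, t, s, d)
        # Attend across time within each speaker stream.
        x = x.transpose(1, 2).reshape(b * s, t, d)
        x = self.time_layer(x).reshape(b, s, t, d).transpose(1, 2)
        return x                               # (B, T, S, D)
```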
Graph neural networks (GNNs) have shown expressive performance on graphs by aggregating information from neighbors. Recently, several studies have discussed the importance of modeling neighborhood distributions on graphs. However, most existing GNNs aggregate neighbor features through a single statistic (e.g., mean, max, sum), which loses information related to the distribution of neighbor features and thus degrades model performance. In this paper, inspired by the method of moments in statistical theory, we propose to model neighbor feature distributions with multi-order moments. We design a novel GNN model, the Mix-Moment Graph Neural Network (MM-GNN), which includes a Multi-order Moment Embedding (MME) module and an element-wise attention-based moment adaptor module. MM-GNN first computes the multi-order moments of each node's neighbors as signatures, then uses the element-wise attention-based moment adaptor to assign larger weights to the important moments of each node and update node representations. We conduct extensive experiments on 15 real-world graphs (including social networks, citation networks, web-page networks, etc.) to evaluate our model, and the results demonstrate the superiority of MM-GNN over existing state-of-the-art models.
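A sketch of the multi-order moment signature (the MME step), under an assumed mean-centered moment definition with signed p-th roots to keep all orders on a comparable scale; the element-wise attention-based moment adaptor is omitted:

```python
import torch

def neighbor_moments(x, edge_index, num_nodes, orders=(1, 2, 3)):
    """x: (N, F) node features; edge_index: (2, E) with edges src -> dst.
    Returns per-node signatures of shape (N, len(orders) * F)."""
    src, dst = edge_index
    deg = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(src.numel()))
    deg = deg.clamp(min=1).unsqueeze(1)
    # First moment: mean of neighbor features.
    mean = torch.zeros(num_nodes, x.size(1)).index_add_(0, dst, x[src]) / deg
    moments = [mean]
    for p in orders[1:]:
        # Centered p-th moment of the neighbor feature distribution.
        diff = (x[src] - mean[dst]) ** p
        m = torch.zeros(num_nodes, x.size(1)).index_add_(0, dst, diff) / deg
        # Signed p-th root so higher orders stay commensurate with the mean.
        moments.append(m.abs().pow(1.0 / p) * m.sign())
    return torch.cat(moments, dim=1)
```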
Video-based 3D human pose estimation aims to localize the 3D coordinates of human joints in videos. Recent transformer-based approaches focus on capturing spatio-temporal information from sequential 2D poses, but they cannot effectively model contextual depth features because the visual depth features are lost in the step of 2D pose estimation. In this paper, we simplify the paradigm into an end-to-end framework, the Instance-guided Video Transformer (IVT), which can effectively learn spatio-temporal contextual depth information from visual features and directly predict 3D poses from video frames. In particular, we first formulate video frames as a series of instance-guided tokens, each of which can predict the 3D pose of a human instance. These tokens contain body structure information because they are extracted under the guidance of joint offsets from the human center to the corresponding body joints. The tokens are then sent into IVT to learn spatio-temporal contextual depth. In addition, we propose a cross-scale instance-guided attention mechanism to handle the scale variation among multiple persons. Finally, the 3D pose of each person is decoded from the instance-guided tokens by coordinate regression. Experiments on three widely used 3D pose estimation benchmarks show that the proposed IVT achieves state-of-the-art performance.
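One plausible reading of the instance-guided token extraction, sketched below: each token is a feature vector sampled at a detected person center plus a predicted per-joint offset. The coordinate conventions and shapes are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def instance_guided_tokens(feat, centers, offsets):
    """feat: (1, C, H, W) feature map; centers: (P, 2) person centers and
    offsets: (P, J, 2) center-to-joint offsets, both as normalized (x, y)
    coordinates in [-1, 1]. Returns (P, J, C) joint-level tokens."""
    # Joint locations = person center + predicted per-joint offset.
    joints = centers.unsqueeze(1) + offsets          # (P, J, 2)
    grid = joints.unsqueeze(0)                       # (1, P, J, 2)
    # Bilinearly sample one feature vector per joint location.
    tok = F.grid_sample(feat, grid, align_corners=False)  # (1, C, P, J)
    return tok.squeeze(0).permute(1, 2, 0)           # (P, J, C)
```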
In recent years, neural networks have been expanding rapidly with novel strategies and applications. However, several challenges remain unsolved in neural network technologies, despite the fact that they will unavoidably have to be addressed for critical applications. Attempts have been made to overcome these challenges by representing and embedding domain knowledge in symbolic representations. As a result, the concept of neuro-symbolic learning (NeSyL) has emerged, which incorporates aspects of symbolic representation and brings common sense into neural networks. In domains where interpretability, reasoning, and explainability are crucial, such as video and image captioning, question answering and reasoning, health informatics, and genomics, NeSyL has shown promising results. This review presents a comprehensive survey of state-of-the-art NeSyL approaches, their principles, advances in machine and deep learning algorithms, applications such as ophthalmology, and, most importantly, future perspectives on this emerging field.
Given the increasing popularity of exploratory data analysis (EDA), understanding the underlying causes of the knowledge acquired through EDA is crucial, but this remains under-researched. This study promotes, for the first time, a transparent and explicable perspective on data analysis, called explainable data analysis (XDA). XDA provides data analysis with qualitative and quantitative explanations of causal and non-causal semantics. In this way, XDA will significantly improve human understanding of, and confidence in, the outputs of data analysis, thereby facilitating accurate data interpretation and decision-making in the real world. To this end, we present XInsight, a general framework for XDA. XInsight is a three-module, end-to-end pipeline designed to extract causal graphs, translate causal primitives into XDA semantics, and quantify the quantitative contribution of each explanation to a data fact. XInsight uses a set of design concepts and optimizations to address the inherent difficulties associated with integrating causality into XDA. Experiments on synthetic and real-world datasets, along with human evaluations, demonstrate the highly promising capabilities of XInsight.
Multi-person 3D pose estimation is a challenging task because of occlusion and depth ambiguity, especially in crowded scenes. To address these issues, most existing methods explore modeling body context cues by enhancing feature representations with graph neural networks or by adding structural constraints. However, these methods are not robust due to their single-root formulation, which decodes 3D poses from a root node with a predefined graph. In this paper, we propose GR-M3D, which models Multi-person 3D pose estimation with dynamic Graph Reasoning. The decoding graph in GR-M3D is predicted instead of predefined. In particular, it first generates several data maps and enhances them with a scale- and depth-aware refinement module (SDAR). Then multiple root keypoints and dense decoding paths for each person are estimated from these data maps. Based on them, a dynamic decoding graph is built by assigning path weights to the decoding paths, where the path weights are inferred from the enhanced data maps. This process is named dynamic graph reasoning (DGR). Finally, the 3D pose of each detected person is decoded according to its dynamic decoding graph. By adopting soft path weights, GR-M3D can adjust the structure of the decoding graph according to the input data, which makes the decoding graph best adapt to different input persons and makes the model more capable of handling occlusion and depth ambiguity than previous methods. We empirically show that the proposed bottom-up approach even outperforms top-down methods and achieves state-of-the-art results on three 3D pose datasets.
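A schematic of the soft-path-weight decoding step as described, with assumed shapes and softmax routing; the data-map generation, SDAR, and the inference of the path weights themselves are omitted:

```python
import torch

def decode_poses(root_kpts, path_logits, offsets):
    """root_kpts: (P, R, 3) candidate root keypoints per person;
    path_logits: (P, J, R) scores for routing joint j through root r;
    offsets: (P, R, J, 3) per-path offsets. Returns (P, J, 3) poses."""
    # Soft path weights: a differentiable routing over decoding paths.
    w = path_logits.softmax(dim=-1)                      # (P, J, R)
    # Candidate joint position along each path = root + learned offset.
    cand = root_kpts.unsqueeze(2) + offsets              # (P, R, J, 3)
    # Final joint = weighted sum over paths.
    return (w.permute(0, 2, 1).unsqueeze(-1) * cand).sum(dim=1)
```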
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transmitting high-quality textures from heavily degraded low-quality sequences affected by blur, additive noise, and compression artifacts. In this work, we propose a novel Frequency-Transformer (FTVSR) for handling low-quality videos, which carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches, and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual textures can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle the different complicated degradation processes found in real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos by clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
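An illustrative sketch of per-frequency-band self-attention, using an FFT magnitude as a stand-in for the paper's spectral transform and a simple radial bucketing of coefficients into bands; the DFA mechanism and the divided space/temporal attention are not reproduced here:

```python
import torch
import torch.nn as nn

class FrequencyBandAttention(nn.Module):
    """Split frames into patches, map patches to spectral coefficients,
    bucket coefficients into bands, and attend over patches per band."""
    def __init__(self, patch=8, bands=4, d_model=64, nhead=4):
        super().__init__()
        self.patch, self.bands = patch, bands
        # One projection per band, since bands hold different numbers of
        # spectral coefficients.
        self.proj = nn.ModuleList(nn.LazyLinear(d_model) for _ in range(bands))
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, frames):                 # (B, 1, H, W) grayscale frames
        b, _, h, w = frames.shape
        p = self.patch
        # Non-overlapping p x p patches: (B, n_patches, p, p).
        patches = frames.unfold(2, p, p).unfold(3, p, p).reshape(b, -1, p, p)
        # Spectral map per patch (FFT magnitude as the stand-in transform).
        spec = torch.fft.fft2(patches).abs()
        # Assign each coefficient to a frequency band by (fy + fx) distance.
        fy, fx = torch.meshgrid(torch.arange(p), torch.arange(p), indexing="ij")
        band = ((fy + fx).float() / (2 * (p - 1)) * (self.bands - 1)).round().long()
        outs = []
        for i in range(self.bands):
            tok = self.proj[i](spec[..., band == i])   # (B, n_patches, d_model)
            outs.append(self.attn(tok, tok, tok)[0])   # attention within band i
        return torch.stack(outs, dim=1)                # (B, bands, n_patches, d_model)
```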
Transformers are widely used in NLP tasks. However, current approaches to leveraging transformers for language understanding expose one weak spot: number understanding. In some scenarios numbers occur frequently, especially in semi-structured data like tables. But current approaches to number-rich tasks with transformer-based language models abandon or lose some of the numeracy information - e.g., by breaking numbers into sub-word tokens - which leads to many number-related errors. In this paper, we propose the LUNA framework, which improves the numerical reasoning and calculation capabilities of transformer-based language models. With the number plugins NumTok and NumBed, LUNA represents each number as a whole when modeling the input. With number pre-training, including a regression loss and model distillation, LUNA bridges the gap between number and vocabulary embeddings. To the best of our knowledge, this is the first work that explicitly injects numeracy capability into language models using Number Plugins. Besides evaluating toy models on toy tasks, we evaluate LUNA on three large-scale transformer models (RoBERTa, BERT, TabBERT) over three different downstream tasks (TAT-QA, TabFact, CrediTrans), and observe that the performance of the language models is consistently improved by LUNA. The augmented models also improve the official baseline of TAT-QA (EM: 50.15 -> 59.58) and achieve SOTA performance on CrediTrans (F1 = 86.17).
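A miniature version of the number-plugin idea: extract each literal number, replace it with a single placeholder token, and embed its value with a small network. The regex, placeholder, and log-magnitude MLP below are assumptions for illustration, not the actual NumTok/NumBed:

```python
import re
import torch
import torch.nn as nn

NUM_RE = re.compile(r"\d+(?:\.\d+)?")

def numtok_split(text):
    """Replace each literal number with one [NUM] placeholder and return
    the extracted values (the real tokenizer also handles signs, commas,
    scientific notation, etc.)."""
    values = [float(m) for m in NUM_RE.findall(text)]
    return NUM_RE.sub("[NUM]", text), values

class NumBed(nn.Module):
    """Map a scalar to a vector in the word-embedding space; the
    sign/log-magnitude featurization is an assumed design."""
    def __init__(self, d_model=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, values):                 # values: (n_numbers,)
        feats = torch.stack([values.sign(),
                             values.abs().clamp(min=1e-8).log()], dim=-1)
        return self.mlp(feats)                 # (n_numbers, d_model)

# Usage: the embeddings would be injected at the [NUM] positions.
text, vals = numtok_split("Revenue rose from 50.15 to 59.58")
emb = NumBed()(torch.tensor(vals))
```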